As predicted in AI and the privatisation of everything a couple of weeks ago, Peter Thiel’s Palantir operation has taken over the federated data service for the NHS. The Guardian reminds us of just how suitable a partnership this is for our troubled times. The NHS is sick. Peter Thiel believes that ‘the NHS makes people sick’, and that the best solution is to ‘just rip the whole thing from the ground and start over’.
The first thing Palantir will be ripping up is the data sharing opt-out that NHS patients have been using in their millions since it became clear that commercial interests might be able to access their health data for profit. Because nothing says ‘free market’ like taking over a public service and forcing service users to hand over their data. In fact, Palantir (meaning ‘watch over from afar’) has little experience with free markets. Its data/AI/surveillance business was built on helping the US government to spy on people nationally and internationally and on supporting militarised border surveillance. So at least it’s staying close to its public service roots.
Something I did not see coming was another huge repository of health data, UK Biobank, opening its portals to the US insurance industry and random AI start-ups. I didn’t foresee this mainly because - when my late Dad asked me to join him in a generational study of heart disease - I read Biobank’s information leaflet and online FAQs and believed their assurances that they would never share patient data with commercial organisations. Turns out what they meant was ‘identifiable’ patient data, which made the assurances a bit redundant as privacy laws meant they couldn’t do that anyway. Still, it’s heart-warming to learn that our family health histories are boosting the value of organisations like ReMark International, a consultancy to the banking and insurance sectors, and Lydia.ai, who use ‘state of the art AI’ to provide ‘personalised and predictive health scores’, not to patients or doctors but to insurers wanting to ‘price’ the risks of people getting ill - with heart disease, perhaps. Also, it’s good we have helped Flying Troika, another AI company, towards their entirely-public-spirited goals of ‘fraud detection’ and ‘identifying credit risk’.
Mind the data
What does this have to do with education? Well, first, if we want to educate people about AI risks, I think we need to discuss the ways AI corporations are moving into the public sector, and healthcare is a good place to look. And second, although learner data does not have anything like the value or coherence of healthcare data at present, this does not mean we can be complacent about it. UNESCO, in its 2022 policy review Minding the Data: learners’ privacy and security, calls edtech a ‘commercial ecosystem fuelled by learner data’ and points out that the lack of coherence in data management actually makes regulation harder:
more and more of the actors involved in the provision of education are online platforms, characterized by their ‘transnational nature’, and [that] also makes it more and more difficult for States to address their impact.
Also in 2022 the UK Digital Futures Commission identified four problems with learner data:
It is near impossible to discover what data is collected by EdTech
EdTech profits from children’s data while they learn
EdTech’s privacy policies and/or legal terms do not comply with data protection regulation
Regulation gives schools the responsibility but not the power to control EdTech data processing
The influence of ‘transnational platforms’ has only increased with the generative AI surge, an outpouring of opportunity that quickly congealed into a familiar shape: Microsoft/OpenAI, Meta, Alphabet, X/Tesla, of which two (Microsoft and Alphabet/Google) were already dominant in education. Today, you could replace ‘EdTech’ with ‘Generative AI’ in all four problems. In fact the ‘responsibility without power’ problem is exactly what I called out in my review of the Russell Group guidance on GenAI in higher education.
There are new problems as well. One is the risk of learners introducing personal information into prompts. A common use of ChatGPT, for example, is to fine-tune a personal statement for university applications. Students may also turn to it for help with a personal reflection, or an assignment that draws on real-world personal or professional material, the kind that teachers are encouraged to set, partly to elicit original material that can’t (yet) be found in GenAI models. Besides these voluntary donations of personal material to the data model, there are also involuntary data trails. OpenAI’s latest privacy policy states that:
We may automatically collect information about your use of the Services, such as the types of content that you view or engage with, the features you use and the actions you take, as well as your time zone, country, the dates and times of access, user agent and version, type of computer or mobile device, and your computer connection.
This, I suppose, is what language models call ‘context’.
‘Safer AI’ round two
I called out some of the data problems in a post on the risks to student wellbeing from the early days of this stack. But I failed to notice, on the fringes of the UK’s ‘Safer AI Summit’ two weeks ago, a smouldering speech by Baroness Kidron, one of the authors of the 2022 Digital Futures Commission report and a global expert on data and young people’s rights. She had smelled the ‘AI exceptionalism’ and tech-tosterone wafting over from the Summit proper, and she had issues.
AI is not separate and different and the language we use to describe either its benefits or threat must make that clear. AI is … part of human built systems over which we still have agency. Who owns the AI, who benefits, who is responsible and who gets hurt – is at this point – still in question. The language that suggests that AI is too late and too difficult for us to deal with is a carryover of decades of a deliberate strategy of tech exceptionalism that has privatised the wealth of technology and outsourced the cost to society. Existential threat is the language of tech exceptionalism. It is tech exceptionalism that poses an existential threat to humanity, not the technology itself.
Before we turn to events at existential-central, here’s another gem I missed from the ‘Safer AI Summit’. In the two days leading up to the public Rishi x Elon fest, the Department for Education ran an ‘AI Hackathon’, the first step in a programme ‘to safely bring [AI]’s vast benefits to schools’. Despite the pre-hack hype (‘an important step in ensuring the UK remains at the forefront of AI globally’), and the welcome involvement of some actual teachers and students, Imperfect Offerings has found only one public outcome: a boosterish write-up on the website of Faculty AI, reporting that ‘our data scientists worked with school students to explore the benefits and limitations of GenAI’. There follows a now-familiar list of the ‘benefits’, though no ‘limitations’ seem to have been found on this occasion. The write-up is only the start, however: the original DfE press release suggests Faculty AI will now be involved in ‘use case’ and ‘proof of concept’ stages, and Faculty’s Director of Government says ‘we’re excited to help get these tools into classrooms’.
So who are Faculty AI, and why are they at the centre of the Government’s ‘Safer’ AI strategy for schools? Some of you may be unkind enough to remember a spot of scandal when it emerged that a cabinet minister and the Government adviser Dominic Cummings may have benefited from a contract awarded to Faculty to manage NHS data during the Covid-19 pandemic. With Palantir, in fact. Funny old world. But Faculty clearly does not feel that this association has done them any harm. The Hackathon lead promotes her credentials as ‘two years in No 10 as senior policy adviser to the Prime Minister, running a large strategy team for the Cabinet Secretary’. She also provides a link for you to follow if you ‘want help with your AI strategy’ from this unusually well-connected bunch of hackers. Readers, I followed the link, and these were my options:
Do please share with me if you penetrate any further into Faculty AI’s offer to the education sector.
Cage fight for the future
And so, as promised, to existential-central. We may never know exactly what happened between Sam Altman and the Board of OpenAI this week, but we have learned two things from the ruckus. One: the difference between OpenAI’s mission to ‘benefit all humanity’ and the commercial interests of Microsoft is no more than the 24 hours it takes for Microsoft to overturn any decision of the OpenAI board it doesn’t like. And two: whatever AI safety issue may be at stake, the only way to resolve it - in the interests of humanity, guys - is a public flounce-off between precious, white, male egos.
I’m sure this behaviour, like Musk’s previous offer to sort out his differences with Mark Zuckerberg in a cage fight, leaves all of humanity feeling safer. True, the Board of OpenAI is now just three of the original white guys, but apparently they are looking to recruit from the rest of humanity at some time in the near future.
Talking of recruitment, here’s another spin on graduate working futures from my summer forecast: a new loop of doom, if you will. This week Personnel Today reported that 28% of graduate recruiters are using AI in the selection process, mainly for pre-screening and psychometric testing, and many more say they plan to do so. On the other side of the table, the Institute of Student Employers reports research with students that found more than 70% would use generative AI when applying for a post: 47% would use it to complete applications, 39% to help with online tests, 37% during online interviews and 38% while attending assessment centres (which actually sounds quite difficult - perhaps they have pre-orders on the wearable AI pin that was launched this month?). It’s not surprising, with the number of applicants per graduate post rising, that 83% of recruiters found AI made the selection process faster. But with only 8% saying that it ‘enhanced the likelihood of finding the best candidate for the job’, we do seem to be in some kind of AI-driven race to the middle. Maybe with AI interviewing AI, the loop will eventually spit out the most suitable AI for the job.
More seriously, the ISE research found that paid versions of ChatGPT perform ‘significantly better and more consistently in cognitive-heavy assessments’ that are widely used in recruitment. No surprise there - these are exactly the metrics AI companies use to assess their own products. If AI support becomes integral to the testing and recruitment of people, the effects of pricing poorer students out of the very tools that are designed to pass this kind of test will surely overwhelm any small levelling-up effects that may appear at the start. The Institute of Student Employers tells its members (and I think we are used to ‘existential’ pronouncements of this kind by now):
Failing to work through these issues and simply ignoring generative AI’s use in the application process could completely undermine your process.
So that’s the graduate recruitment loop. Here’s the wider loop it’s part of. A society that uses bot-designed tests and test-optimised bots to assess people’s capacities and worth can expect to see more and more money pouring into bots and bot developers and corporate minders. It can expect to see poor families putting themselves deeper into poverty so at least some of their members can gain access to these technologies and a chance of better work (where better also means botter, that is, more capable in relation to AI).
Finally, something I definitely saw coming this week was a conversation with Doug Belshaw, Laura Hilliger and Ian O’Byrne at We Are Open, and it did not disappoint. The topic was AI and media literacies, but we strayed into many other issues such as historical fiction, fan fiction, and walled gardens. I recommend all the interviews in this series and indeed all the work of the ‘We Are Open’ cooperative, which is generous, expansive and curious.
Please comment, like and subscribe to stay in the loop.
The idea of one suite of AI software interviewing another set for a job made me laugh, but I guess it's just another step along the road to the promised land of appointing candidates that are so unexceptionable that no-one needs to even know they are there.
Thanks for the kind words, Helen, and indeed agreeing to come on the podcast!
There's much to be cautious and sad about in what you share above, but this made me laugh: "Maybe with AI interviewing AI, the loop will eventually spit out the most suitable AI for the job."
Hiring is entirely broken, and I can only hope for results from recent moves to remove a) university degrees as a proxy to find 'the right sort of candidate', and b) CVs as the first step of the hiring process. I somewhat doubt that, though, and in fact what's beginning to replace both is probably worse.
Thanks for continuing to write so eloquently and provide so many links! One post from you adds to my reading list significantly 😅